
    GP-HD: Using Genetic Programming to Generate Dynamical Systems Models for Health Care

    The huge wealth of data in the health domain can be exploited to create models that predict the development of health states over time. Temporal learning algorithms are well suited to learn relationships between health states and make predictions about their future development. However, these algorithms either (1) focus on learning one generic model for all patients, providing general insights but often with limited predictive performance, or (2) learn individualized models from which it is hard to derive generic concepts. In this paper, we present a middle ground, namely parameterized dynamical systems models that are generated from data using a Genetic Programming (GP) framework. A fitness function suitable for the health domain is exploited. An evaluation of the approach in the mental health domain shows that the performance of the model generated by the GP is on par with that of a dynamical systems model developed based on domain knowledge, significantly outperforms a generic Long Short-Term Memory (LSTM) model, and in some cases also outperforms an individualized LSTM model.
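
    As a minimal illustration of the core idea, scoring a candidate parameterized dynamical systems model by its prediction error on observed data, the Python sketch below represents a model as a small expression tree for the state update and uses mean squared one-step prediction error as the fitness. The representation, operator set, and all names are illustrative assumptions; the paper's GP framework and health-specific fitness function are not reproduced here.

```python
import random

# A toy individual: next_state = state + dt * f(state), where f is a small
# random expression tree over the scalar state. Illustrative only; GP-HD's
# actual model representation and fitness are defined in the paper.
OPS = [lambda a, b: a + b, lambda a, b: a - b, lambda a, b: a * b]

def random_expr(depth=2):
    """Grow a random expression tree over the state variable 'x'."""
    if depth == 0:
        return random.choice(["x", random.uniform(-1.0, 1.0)])
    return (random.choice(OPS), random_expr(depth - 1), random_expr(depth - 1))

def eval_expr(expr, x):
    """Evaluate an expression tree at state value x."""
    if expr == "x":
        return x
    if isinstance(expr, float):
        return expr
    op, left, right = expr
    return op(eval_expr(left, x), eval_expr(right, x))

def fitness(expr, series, dt=0.1):
    """Mean squared one-step prediction error on an observed time series
    (lower is better); a GP would minimise this over expression trees."""
    err = 0.0
    for t in range(len(series) - 1):
        predicted = series[t] + dt * eval_expr(expr, series[t])
        err += (predicted - series[t + 1]) ** 2
    return err / (len(series) - 1)
```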

    Visible Decomposition: Real-Time Path Planning in Large Planar Environments

    We describe a method called Visible Decomposition for computing collision-free paths in real time through a planar environment with a large number of obstacles. This method divides space into local visibility graphs, ensuring that all operations are local. The search time is kept low because the number of regions is provably small. We analyze the computational demands of the algorithm and the quality of the paths it produces. In addition, we show test results on a large simulation testbed.
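
    Visible Decomposition keeps every operation local by splitting space into local visibility graphs; as a point of reference, the sketch below shows the classical global primitive it builds on, a visibility-graph planner searched with Dijkstra. The intersection test is deliberately simplified (collinear and endpoint-touching cases are ignored) and all names are illustrative.

```python
import heapq
import math

def ccw(a, b, c):
    """Signed area test: positive if a -> b -> c turns counter-clockwise."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def visible(p, q, obstacle_edges):
    """True if segment p-q crosses no obstacle edge.
    Simplified: collinear and endpoint-touching cases are ignored."""
    for a, b in obstacle_edges:
        if ccw(p, q, a) * ccw(p, q, b) < 0 and ccw(a, b, p) * ccw(a, b, q) < 0:
            return False
    return True

def shortest_path(start, goal, obstacle_vertices, obstacle_edges):
    """Dijkstra over the visibility graph of start, goal and obstacle vertices.
    Points are (x, y) tuples; returns [] if the goal is unreachable."""
    nodes = [start, goal] + list(obstacle_vertices)
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, math.inf):
            continue
        for v in nodes:
            if v != u and visible(u, v, obstacle_edges):
                nd = d + math.dist(u, v)
                if nd < dist.get(v, math.inf):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(pq, (nd, v))
    path, node = [], goal
    while node in prev or node == start:  # walk back to recover the path
        path.append(node)
        if node == start:
            break
        node = prev[node]
    return path[::-1]
```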

    Hoeffding Races--model selection for MRI classification

    Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1994. Includes bibliographical references (leaves 58-61). By Oded Maron.

    Learning from Ambiguity

    There are many learning problems for which the examples given by the teacher are ambiguously labeled. In this thesis, we will examine one framework of learning from ambiguous examples known as Multiple-Instance learning. Each example is a bag, consisting of any number of instances. A bag is labeled negative if all instances in it are negative. A bag is labeled positive if at least one instance in it is positive. Because the instances themselves are not labeled, each positive bag is an ambiguous example. We would like to learn a concept that will correctly classify unseen bags. We have developed a measure called Diverse Density and algorithms for learning from multiple-instance examples. We have applied these techniques to problems in drug design, stock prediction, and image database retrieval. These serve as examples of how to translate the ambiguity in the application domain into bags, as well as successful examples of applying Diverse Density techniques.
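
    The bag semantics and the Diverse Density measure can be sketched directly: in its noisy-or form, a candidate concept point scores highly when it is close to at least one instance in every positive bag and far from all instances in every negative bag. The Gaussian-style instance probability below is one common choice; in practice the measure is maximised over candidate points (for example, by gradient ascent started from instances of positive bags). Names are illustrative.

```python
import numpy as np

def diverse_density(x, positive_bags, negative_bags):
    """Noisy-or Diverse Density of a candidate concept point x.

    Each bag is an array of shape (num_instances, num_features). The score
    is high when x is near at least one instance of every positive bag and
    far from all instances of every negative bag.
    """
    def pr_match(instance):
        # Gaussian-style probability that this instance "is" the concept x
        return np.exp(-np.sum((instance - x) ** 2))

    dd = 1.0
    for bag in positive_bags:
        # noisy-or: the bag is explained if at least one instance matches x
        dd *= 1.0 - np.prod([1.0 - pr_match(inst) for inst in bag])
    for bag in negative_bags:
        # a negative bag is explained only if no instance matches x
        dd *= np.prod([1.0 - pr_match(inst) for inst in bag])
    return dd
```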

    On the impact of the cutoff time on the performance of algorithm configurators

    Algorithm configurators are automated methods to optimise the parameters of an algorithm for a class of problems. We evaluate the performance of a simple random local search configurator (ParamRLS) for tuning the neighbourhood size k of the RLS_k algorithm. We measure performance as the expected number of configuration evaluations required to identify the optimal value for the parameter. We analyse the impact of the cutoff time κ (the time spent evaluating a configuration for a problem instance) on the expected number of configuration evaluations required to find the optimal parameter value, where we compare configurations using either best found fitness values (ParamRLS-F) or optimisation times (ParamRLS-T). We consider tuning RLS_k for a variant of the Ridge function class (Ridge*), where the performance of each parameter value does not change during the run, and for the OneMax function class, where longer runs favour smaller k. We rigorously prove that ParamRLS-F efficiently tunes RLS_k for Ridge* for any κ, while ParamRLS-T requires at least quadratic κ. For OneMax, ParamRLS-F identifies k = 1 as optimal with linear κ, while ParamRLS-T requires a κ of at least Ω(n log n). For smaller κ, ParamRLS-F identifies that k > 1 performs better, while ParamRLS-T returns a k chosen uniformly at random.
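
    To make the setting concrete, here is a sketch of the target algorithm RLS_k on OneMax together with a ParamRLS-F-style tuner that compares parameter values by the best fitness reached within a cutoff κ. The analysed configurator's neighbourhood structure and tie-breaking differ, so every detail below is an illustrative assumption.

```python
import random

def rls_k(k, n, cutoff):
    """RLS_k on OneMax: flip k distinct bits per step, accept if not worse.
    Returns the best fitness observed within `cutoff` evaluations."""
    x = [random.randint(0, 1) for _ in range(n)]
    best = sum(x)
    for _ in range(cutoff):
        y = x[:]
        for i in random.sample(range(n), k):
            y[i] ^= 1
        if sum(y) >= sum(x):
            x = y
        best = max(best, sum(x))
    return best

def tune_k(candidate_ks, n, cutoff, steps=100):
    """ParamRLS-F-style sketch: propose a different k and keep it only if it
    reaches a strictly better best fitness within the cutoff."""
    k = random.choice(candidate_ks)
    for _ in range(steps):
        challenger = random.choice([v for v in candidate_ks if v != k])
        if rls_k(challenger, n, cutoff) > rls_k(k, n, cutoff):
            k = challenger
    return k

# e.g. tune_k([1, 2, 3, 5], n=100, cutoff=2_000) tends towards k = 1 once
# the cutoff is large enough, as the ParamRLS-F analysis above predicts.
```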

    Using Errors to Create Piecewise Learnable Partitions

    In this paper we describe an algorithm which exploits the error distribution generated by a learning algorithm in order to break up the domain being approximated into piecewise learnable partitions. Traditionally, the error distribution has been neglected in favor of a lump error measure such as RMS. By doing this, however, we lose a lot of important information. The error distribution tells us where the algorithm is doing badly and, if there exists a "ridge" of errors, also tells us how to partition the space so that one part of the space will not interfere with the learning of another. The algorithm builds a variable-arity k-d tree whose leaves contain the partitions. Using this tree, new points can be predicted using the correct partition by traversing the tree. We instantiate this algorithm using memory-based learners and cross-validation.
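
    The abstract does not spell out the splitting criterion, so the sketch below is a hedged illustration of the core idea only: recursively split the input space where a base learner's held-out errors concentrate, and route new points to a leaf partition by traversing the tree. The error-weighted spread and centre heuristics are assumptions made for illustration, not the paper's exact criterion.

```python
import numpy as np

def build_partition_tree(points, errors, depth=0, max_depth=4, tol=0.1):
    """Recursively partition the input space where prediction errors pile up.

    `points` is an (m, d) array of inputs, `errors` the per-point errors of
    some base learner. A leaf is returned once the mean error is low or the
    depth budget is spent; otherwise we split on the axis with the largest
    error-weighted spread, at the error-weighted centre.
    """
    if depth >= max_depth or len(points) < 2 or np.mean(errors) < tol:
        return {"leaf": True, "points": points}
    w = errors / errors.sum()
    centres = np.average(points, axis=0, weights=w)           # shape (d,)
    spread = np.average((points - centres) ** 2, axis=0, weights=w)
    axis = int(np.argmax(spread))
    cut = centres[axis]
    left = points[:, axis] <= cut
    if left.all() or not left.any():                          # degenerate split
        return {"leaf": True, "points": points}
    return {"leaf": False, "axis": axis, "cut": cut,
            "lo": build_partition_tree(points[left], errors[left],
                                       depth + 1, max_depth, tol),
            "hi": build_partition_tree(points[~left], errors[~left],
                                       depth + 1, max_depth, tol)}

def find_partition(tree, x):
    """Route a new point to its leaf partition by traversing the tree."""
    while not tree["leaf"]:
        tree = tree["lo"] if x[tree["axis"]] <= tree["cut"] else tree["hi"]
    return tree
```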

    Abstract

    Selecting a good model of a set of input points by cross validation is a computationally intensive process, especially if the number of possible models or the number of training points is high. Techniques such as gradient descent are helpful in searching through the space of models, but problems such as local minima, and more importantly, the lack of a distance metric between various models reduce the applicability of these search methods. Hoeffding Races is a technique for finding a good model for the data by quickly discarding bad models and concentrating the computational effort on differentiating between the better ones. This paper focuses on the special case of leave-one-out cross validation applied to memory-based learning algorithms, but we also argue that it is applicable to any class of model selection problems.
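
    The racing idea can be sketched from the abstract alone: evaluate all surviving models on held-out points in parallel, and discard any model whose Hoeffding confidence interval lies provably above the current leader's. The sketch assumes leave-one-out errors bounded in [0, error_bound]; the evaluation callback and all names are illustrative, and a full treatment would union-bound delta over models and checkpoints.

```python
import math
import random

def hoeffding_race(models, num_points, loo_error, delta=0.01, error_bound=1.0):
    """Race models on leave-one-out cross-validation.

    `loo_error(model, i)` should return the error of `model` trained on all
    points except point i and tested on point i, assumed to lie in
    [0, error_bound]. Models provably worse than the leader are dropped.
    """
    alive = {m: [] for m in models}      # model -> observed LOO errors
    order = list(range(num_points))
    random.shuffle(order)                # visit held-out points in random order

    for i in order:
        for m in alive:
            alive[m].append(loo_error(m, i))
        n = len(alive[next(iter(alive))])
        # Hoeffding: true mean lies within eps of the empirical mean w.p. 1-delta
        # (a full treatment would union-bound delta over models and checkpoints)
        eps = error_bound * math.sqrt(math.log(2.0 / delta) / (2.0 * n))
        mean = {m: sum(errs) / n for m, errs in alive.items()}
        best_upper = min(mean.values()) + eps
        # keep only models whose lower bound stays below the leader's upper bound
        alive = {m: errs for m, errs in alive.items() if mean[m] - eps <= best_upper}
        if len(alive) == 1:
            break
    return min(alive, key=lambda m: sum(alive[m]) / len(alive[m]))
```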